
    Complex Knowledge Base Question Answering: A Survey

    Knowledge base question answering (KBQA) aims to answer a question over a knowledge base (KB). Early studies mainly focused on answering simple questions over KBs and achieved great success. However, their performance on complex questions is still far from satisfactory. Therefore, in recent years researchers have proposed a large number of novel methods that look into the challenges of answering complex questions. In this survey, we review recent advances in KBQA with a focus on solving complex questions, which usually contain multiple subjects, express compound relations, or involve numerical operations. In detail, we begin by introducing the complex KBQA task and relevant background. Then, we describe benchmark datasets for the complex KBQA task and introduce the construction process of these datasets. Next, we present two mainstream categories of methods for complex KBQA, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods. Specifically, we illustrate their procedures with flow designs and discuss their major differences and similarities. After that, we summarize the challenges that these two categories of methods encounter when answering complex questions and explicate the advanced solutions and techniques used in existing work. Finally, we conclude and discuss several promising directions for future research on complex KBQA.
    Comment: 20 pages, 4 tables, 7 figures. arXiv admin note: text overlap with arXiv:2105.1164
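As a toy illustration of the "compound relations" the survey refers to, a complex question can sometimes be answered by chaining single-relation hops over a KB. The tiny KB, entities, and relations below are invented for illustration; real KBQA systems operate over large KBs such as Freebase or Wikidata.

```python
# Hypothetical miniature knowledge base: (entity, relation) -> list of objects.
KB = {
    ("Inception", "directed_by"): ["Christopher Nolan"],
    ("Christopher Nolan", "born_in"): ["London"],
}

def hop(entities, relation):
    """Follow one relation from a set of entities."""
    out = []
    for e in entities:
        out.extend(KB.get((e, relation), []))
    return out

# "Where was the director of Inception born?" decomposes into two hops.
answer = hop(hop(["Inception"], "directed_by"), "born_in")
print(answer)  # ['London']
```

SP-based methods would produce such a hop sequence as an explicit logical form, while IR-based methods score candidate entities retrieved from the question's neighborhood in the KB.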

    Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes

    Modern autonomous driving systems are typically divided into three main tasks: perception, prediction, and planning. The planning task involves predicting the trajectory of the ego vehicle based on inputs from both internal intention and the external environment, and manipulating the vehicle accordingly. Most existing works evaluate their performance on the nuScenes dataset using the L2 error and collision rate between the predicted trajectories and the ground truth. In this paper, we reevaluate these existing evaluation metrics and explore whether they accurately measure the superiority of different methods. Specifically, we design an MLP-based method that takes raw sensor data (e.g., past trajectory, velocity, etc.) as input and directly outputs the future trajectory of the ego vehicle, without using any perception or prediction information such as camera images or LiDAR. Our simple method achieves end-to-end planning performance on the nuScenes dataset comparable to that of perception-based methods, reducing the average L2 error by about 20%. Meanwhile, the perception-based methods retain an advantage in collision rate. We further conduct an in-depth analysis and provide new insights into the factors that are critical to the success of the planning task on the nuScenes dataset. Our observations also indicate that the current open-loop evaluation scheme for end-to-end autonomous driving in nuScenes needs to be rethought. Code is available at https://github.com/E2E-AD/AD-MLP.
    Comment: Technical report. Code is available
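A minimal sketch of the evaluation setup described above: an ego-state-only planner is scored by the average L2 distance between predicted and ground-truth waypoints. The tiny MLP, its shapes, and the 3-second/2-Hz horizon are illustrative assumptions, not the authors' actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_planner(ego_state, w1, w2):
    """Two-layer MLP mapping a flattened ego state to 6 future (x, y) waypoints."""
    h = np.maximum(w1 @ ego_state, 0.0)   # ReLU hidden layer
    return (w2 @ h).reshape(6, 2)         # e.g. a 3 s horizon sampled at 2 Hz

def avg_l2(pred, gt):
    """Average L2 error: mean Euclidean distance over waypoints."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

ego = rng.normal(size=8)                  # past trajectory + velocity, flattened
w1 = rng.normal(size=(16, 8)) * 0.1       # untrained weights, for shape only
w2 = rng.normal(size=(12, 16)) * 0.1
pred = mlp_planner(ego, w1, w2)
print(avg_l2(pred, pred))                 # 0.0 for identical trajectories
```

The point of the paper is precisely that such a perception-free baseline, once trained, scores well on this metric, which motivates questioning the metric itself.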

    A novel image integration technology mapping system significantly reduces radiation exposure during ablation for a wide spectrum of tachyarrhythmias in children

    Objective: Radiofrequency catheter ablation (RFCA) has evolved into an effective and safe technique for the treatment of tachyarrhythmia in children. Concerns about children and the involved medical staff being exposed to radiation during the procedure should not be ignored. "Fluoroscopy integrated 3D mapping", a new 3D non-fluoroscopic navigation system software (CARTO Univu Module), could reduce fluoroscopy during the procedure. However, there are few studies on the use of this new technology in children. In the present study, we analyzed the impact of CARTO Univu on procedural safety and fluoroscopy across a wide spectrum of tachyarrhythmias as compared with CARTO3 alone.
    Methods: The data of children with tachyarrhythmias who underwent RFCA from June 2018 to December 2021 were collected. CARTO Univu was used for mapping and ablation in 200 cases (C3U group) [boys/girls (105/95), mean age (6.8 ± 3.7 years), mean body weight (29.4 ± 7.9 kg)], and CARTO3 was used in 200 cases as the control group (C3 group) [male/female (103/97), mean age (7.2 ± 3.9 years), mean body weight (32.3 ± 19.0 kg)]. The arrhythmias were atrioventricular reentrant tachycardia (AVRT, n = 78), atrioventricular node reentrant tachycardia (AVNRT, n = 35), typical atrial flutter (AFL, n = 12), atrial tachycardia (AT, n = 20), and ventricular arrhythmias [VAs, premature ventricular complexes or ventricular tachycardia, n = 55].
    Results: ① There was no significant difference in the acute success rate, recurrence rate, or complication rate between the C3 and C3U groups [(94.5% vs. 95.0%); (6.3% vs. 5.3%); and (2.0% vs. 1.5%); P > 0.05]. ② CARTO Univu reduced radiation exposure (fluoroscopy time): AVRT C3: 8.5 ± 7.2 min vs. C3U: 4.5 ± 2.9 min, P < 0.05; AVNRT C3: 10.7 ± 3.2 min vs. C3U: 4.3 ± 2.6 min, P < 0.05; AT C3: 15.7 ± 8.2 min vs. C3U: 4.5 ± 1.7 min, P < 0.05; AFL C3: 8.7 ± 3.2 min vs. C3U: 3.7 ± 2.7 min, P < 0.05; VAs C3: 7.7 ± 4.2 min vs. C3U: 3.9 ± 2.3 min, P < 0.05. Corresponding to the fluoroscopy time, the fluoroscopy dose was also significantly reduced. ③ In the C3U group, fluoroscopy during VAs ablation was lower than for the other arrhythmias (P < 0.05).
    Conclusion: The use of the "novel image integration technology" CARTO Univu might be safe and effective in RFCA for a wide spectrum of tachyarrhythmias in children; it significantly reduces fluoroscopy, with a particularly prominent advantage for VAs ablation.
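The relative fluoroscopy-time reductions implied by the mean times reported above (C3 vs. C3U, in minutes) can be restated as percentages; this only re-expresses the abstract's own numbers and adds no new study data.

```python
# Mean fluoroscopy times (minutes) from the abstract: (C3, C3U).
times = {
    "AVRT":  (8.5, 4.5),
    "AVNRT": (10.7, 4.3),
    "AT":    (15.7, 4.5),
    "AFL":   (8.7, 3.7),
    "VAs":   (7.7, 3.9),
}
for arrhythmia, (c3, c3u) in times.items():
    reduction = 100.0 * (c3 - c3u) / c3
    print(f"{arrhythmia}: {reduction:.0f}% shorter mean fluoroscopy time")
```

The largest relative reduction (about 71%) occurs for AT, though the abstract's point ③ concerns the absolute fluoroscopy level during VAs ablation within the C3U group.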

    A Survey of Large Language Models

    Language is essentially a complex, intricate system of human expressions governed by grammatical rules. It poses a significant challenge to develop capable AI algorithms for comprehending and grasping a language. As a major approach, language modeling has been widely studied for language understanding and generation over the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling can lead to performance improvement, they have further studied the scaling effect by increasing the model size even more. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve a significant performance improvement but also show special abilities that are not present in small-scale language models. To mark this difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size. Recently, research on LLMs has been greatly advanced by both academia and industry, and a remarkable milestone is the launch of ChatGPT, which has attracted widespread attention from society. The technical evolution of LLMs has been making an important impact on the entire AI community and would revolutionize the way we develop and use AI algorithms. In this survey, we review the recent advances in LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs, namely pre-training, adaptation tuning, utilization, and capacity evaluation. We also summarize the available resources for developing LLMs and discuss remaining issues for future directions.
    Comment: ongoing work; 51 pages

    Real-time Monitoring for the Next Core-Collapse Supernova in JUNO

    A core-collapse supernova (CCSN) is one of the most energetic astrophysical events in the Universe. The early and prompt detection of neutrinos before (pre-SN) and during the SN burst offers a unique opportunity for multi-messenger observation of CCSN events. In this work, we describe the monitoring concept and present the sensitivity of the system to pre-SN and SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton liquid scintillator detector under construction in South China. The real-time monitoring system is designed with both prompt monitors on the electronic boards and online monitors at the data acquisition stage, in order to ensure both the alert speed and the alert coverage of progenitor stars. Assuming a false alert rate of 1 per year, this monitoring system is sensitive to pre-SN neutrinos up to a distance of about 1.6 (0.9) kpc and to SN neutrinos up to about 370 (360) kpc for a progenitor mass of 30 M⊙ in the case of normal (inverted) mass ordering. The pointing ability for the CCSN is evaluated using the accumulated event anisotropy of the inverse beta decay interactions from pre-SN or SN neutrinos, which, along with the early alert, can play an important role in follow-up multi-messenger observations of the next Galactic or nearby extragalactic CCSN.
    Comment: 24 pages, 9 figures
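The pointing idea above, that the accumulated anisotropy of many individually weakly directional events recovers the source direction, can be sketched with synthetic data. The event model below (isotropic unit vectors plus a small fixed bias toward an assumed source direction) and all numbers are invented for illustration; JUNO's actual IBD kinematics and reconstruction are far more detailed.

```python
import numpy as np

rng = np.random.default_rng(1)
true_dir = np.array([1.0, 0.0, 0.0])       # assumed supernova direction

n_events = 5000
iso = rng.normal(size=(n_events, 3))
iso /= np.linalg.norm(iso, axis=1, keepdims=True)  # isotropic unit vectors
events = iso + 0.1 * true_dir               # weak per-event anisotropy (assumed)

mean_vec = events.mean(axis=0)              # accumulated anisotropy
est_dir = mean_vec / np.linalg.norm(mean_vec)
print(float(np.dot(est_dir, true_dir)))     # close to 1 => good pointing
```

A single event here is nearly useless for pointing, but the mean over thousands of events converges on the injected direction, which is the statistical essence of anisotropy-based pointing.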

    PointDMS: An Improved Deep Learning Neural Network via Multi-Feature Aggregation for Large-Scale Point Cloud Segmentation in Smart Applications of Urban Forestry Management

    Background: The development of laser measurement techniques is of great significance for forestry monitoring and park management in smart cities, providing many conveniences for improving landscape planning efficiency and strengthening digital construction. However, capturing 3D point clouds in large-scale landscape environments is a complex task that generates massive amounts of unstructured data characterized by randomness, rotational invariance, sparsity, and serious occlusions. Methods: To improve the processing efficiency of intelligent devices on massive point clouds, we propose a novel deep learning neural network based on a multi-feature aggregation strategy, designed to segment 3D laser point clouds in complex large-scale scenarios. First, we use multiple terrestrial laser sensors to collect a large amount of data in open urban scenes such as parks, streets, and forests. These data are integrated into a practical database called DMSdataset, which contains different information variables, densities, and dimensions. Then, an automatic blocking module integrated with a multi-feature extractor is constructed to pre-process the unstructured point cloud data and standardize the datasets. Finally, a novel semantic segmentation framework called PointDMS is designed using 3D convolutional deep networks; here, "D" stands for deep network, "M" for multi-feature, and "S" for segmentation. PointDMS achieves better point cloud segmentation performance with a lightweight parameter structure. Results: Extensive experiments on the self-built datasets show that the proposed PointDMS achieves performance comparable to or better than other methods in point cloud segmentation. The overall identification accuracy of the proposed model is up to 93.5%, a 14% increase. For living wood objects in particular, the average identification accuracy is up to 88.7%, an increase of at least 8.2%. These results demonstrate that PointDMS is beneficial for 3D point cloud processing, division, and mining applications in urban forest environments, with good robustness and generalization.
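The block-wise pre-processing mentioned above can be sketched as dividing a large unstructured cloud into fixed-size ground-plane cells so each cell can be fed to a network independently. The 10 m block size and the synthetic uniform cloud are illustrative assumptions; the paper's actual automatic blocking module may differ.

```python
import numpy as np

def partition_blocks(points, block_size=10.0):
    """Group points by their (x, y) grid cell of the given size (metres)."""
    keys = np.floor(points[:, :2] / block_size).astype(int)
    blocks = {}
    for idx, key in enumerate(map(tuple, keys)):
        blocks.setdefault(key, []).append(idx)
    return {k: points[v] for k, v in blocks.items()}

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 50.0, size=(100_000, 3))  # synthetic 50 m x 50 m scene
blocks = partition_blocks(cloud)
print(len(blocks))  # 25 blocks: a 5 x 5 grid of 10 m cells
```

Fixed-size blocking keeps per-batch point counts bounded, which is what makes networks with lightweight parameter structures feasible on city-scale clouds.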

    Non-Destructive Testing of Mechanical Properties of Solid Wood Panel Based on Partial Least Squares Structural Equation Modeling Transfer Method

    Calibration transfer between near-infrared (NIR) spectrometers is a challenging issue in chemometrics and the process industry. Similar instruments may generate strongly different spectral responses, and regression models developed on a first NIR system can rarely be used with spectra collected by a second apparatus. In this work, two novel methods based on Structural Equation Modeling (SEM), called Enhanced Feature Extraction Approaches for factor analysis (EFEA-FA) and Enhanced Feature Extraction Approaches for spectral space transformation (EFEA-SST), were proposed to perform calibration transfer between NIR spectrometers. They were applied to an NIR non-destructive testing model for the mechanical properties of solid wood panels. Four different standardization algorithms were evaluated for transferring solid wood panel quality databases between a portable InGaAs-array NIR spectrometer (NIRquest512) and an HSI camera (SPECIM FX17). The results showed that EFEA-SST yielded the best model evaluation metrics (R2 and Root Mean Square Error of Prediction (RMSEP)) for tensile strength (RMSEP = 11.309, R2 = 0.865), while EFEA-FA gave the best fit for flexural strength (RMSEP = 10.653, R2 = 0.912). These results suggest the potential of the two novel quality-parameter prediction methods based on spectral databases transferred between diverse NIR spectrometers.
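The two evaluation metrics quoted above can be written out explicitly: RMSEP is the root mean square error on the prediction set, and R2 is the coefficient of determination. The y-values below are made up solely to exercise the formulas, not taken from the study.

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root Mean Square Error of Prediction."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = np.array([50.0, 62.0, 71.0, 80.0])   # hypothetical strength values
y_pred = np.array([52.0, 60.0, 73.0, 79.0])
print(rmsep(y_true, y_pred), r2(y_true, y_pred))
```

Lower RMSEP and higher R2 on the transferred spectra indicate a more successful calibration transfer, which is how EFEA-FA and EFEA-SST are compared above.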

    Influence of Synthetic Jets on Multiscale Features in Wall-Bounded Turbulence

    This experimental research focuses on the impact of submerged synthetic jets on a fully developed turbulent boundary layer (TBL) under a drag-reduction condition. Two-dimensional velocity vectors in the flow field are captured with a particle image velocimetry (PIV) system. Proper orthogonal decomposition (POD) analyses provide evidence that synthetic jets notably attenuate the induction effect of the prograde vortex on low-speed fluid in the large-scale fluctuating velocity field, thereby weakening the bursting process of near-wall turbulent events. Furthermore, the introduced perturbation redistributes the turbulent kinetic energy (TKE), concentrating it in small-scale coherent structures. The modal time coefficients of the various POD orders are divided into components in multiple frequency bands by complementary ensemble empirical mode decomposition (CEEMD). It is found that the synthetic jets shift the turbulence signals from low-frequency to high-frequency bands, revealing the relationship between scales and frequency bands. A further scale-decomposition method is proposed: the large-scale fluctuating flow field is obtained after removing the high-frequency noise with the continuous mean square error (CMSE) criterion.
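The snapshot-POD step underlying the analysis above can be sketched via the SVD: fluctuating velocity snapshots are stacked as columns, and the squared singular values rank the modes by their share of turbulent kinetic energy. The random snapshots here are synthetic stand-ins for PIV data; grid size and snapshot count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 50
U_snap = rng.normal(size=(n_points, n_snapshots))   # velocity snapshots
U_snap -= U_snap.mean(axis=1, keepdims=True)        # remove the temporal mean

# POD via SVD: columns of phi are spatial modes.
phi, s, vt = np.linalg.svd(U_snap, full_matrices=False)
energy_fraction = s**2 / np.sum(s**2)               # per-mode TKE share
time_coeffs = np.diag(s) @ vt                        # modal time coefficients

print(float(energy_fraction[:3].sum()))  # energy captured by the first 3 modes
```

It is the rows of `time_coeffs` that a CEEMD-style analysis would then split into frequency-band components, mode by mode.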

    Forecasting and Monitoring Smart Buildings with the Internet of Things, Digital Twins and Blockchain

    Smart buildings have grown in number and capability worldwide. They rely on sensor networks and the Internet of Things (IoT), which enable performance optimization through various methods, including efficient energy-management solutions and predictive technologies for future decisions. Smart buildings generate data in real time, and human-based choices are often made in real time. The IoT provides connectivity for the various sensors. Smart buildings must ensure proper operation with minimum time and energy with regard to dynamic human demands and real-time performance. Blockchain technology can ensure data integrity and a regulated, autonomous flow of information between users who have varying degrees of trust. Therefore, the combination of Blockchain and the IoT can achieve data integrity between IoT devices installed in smart buildings where multiple humans are involved and have different, non-overlapping expectations of the building's environment and response. In this paper, a set of predictive technologies is analyzed, and their common parameters are shown to affect the possible prediction and visualization of IoT data for users. A Blockchain-based digital twin is proposed in which individual users owning some of the sensors can use and visualize processed data according to rules set by the owners in the Blockchain.
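The data-integrity claim above rests on hash chaining: each sensor record is linked to the hash of the previous record, so tampering with any record invalidates every later hash. The sketch below shows only that core mechanism with invented readings; a real blockchain adds consensus, signatures, and the access rules the paper describes.

```python
import hashlib
import json

def add_record(chain, reading):
    """Append a reading linked to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"reading": reading, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"reading": reading, "prev": prev_hash, "hash": digest})

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"reading": rec["reading"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"sensor": "hvac_temp", "value": 21.4})  # hypothetical sensor
add_record(chain, {"sensor": "hvac_temp", "value": 21.7})
print(verify(chain))                  # True
chain[0]["reading"]["value"] = 99.9   # tamper with the first record
print(verify(chain))                  # False
```

This is why IoT devices with non-overlapping expectations can still trust a shared record: integrity is checkable by anyone without trusting the writer.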

    Structural model and capacity determination of underground reservoir in goaf: a case study of Shendong mining area in China

    The large-scale extraction of coal resources in the western mining areas of China has resulted in a significant loss of water resources, posing a challenge for coordinating resource extraction with ecological preservation in the mining areas. Although underground reservoir technology can effectively solve this problem, measuring the storage capacity of underground reservoirs through engineering experiments is costly and time-consuming. There is currently a lack of accurate, reliable, and low-cost theoretical calculation solutions, which greatly restricts the promotion and application of underground reservoir technology. Theoretical calculation methods for underground reservoir capacity were therefore studied based on parameters from the Shendong mining area in China. A water storage structure model for coal mine underground water reservoirs was established, taking into account the settlement boundaries of the bedrock and loose layers in shallow coal seams, based on key layer theory and the spatial structure model of the mining roof. The mathematical expression for the load on the coal-rock mass in the goaf was derived considering the rock-breaking characteristics of the mining roof. The model determined the range of each water storage area, including the zone of loose body, the zone of gradual load, and the compacted zone, based on the strength of the water storage capacity. The key parameters for calculating the water storage capacity were determined using a modified model for the movement of shallow thick loose layers and thin bedrock. Finally, a calculation method for the storage capacity was obtained. Based on real data from the 22615 working face of a mine in the Shendong mining area, the water storage capacity of the underground reservoir in the goaf was jointly calculated using FLAC3D, Surfer 12.0, and the proposed calculation method. The calculated water storage capacity was approximately 1.0191 million m3. Although this result was 2.20% smaller than the on-site water-pumping experiment data, it verifies the feasibility of the proposed calculation method for determining the water storage capacity of underground water reservoirs.
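A hedged sketch of the zoned capacity estimate described above: summing, over the three water storage zones, area times equivalent water-storage height times a storage coefficient. Every number and the simplified formula itself are placeholders for illustration, not the paper's derived parameters or its actual expressions.

```python
# Hypothetical zone parameters: (area_m2, equivalent_height_m, storage_coeff).
# All values are invented placeholders, not Shendong case-study data.
zones = {
    "loose body":   (60_000.0, 4.0, 0.30),
    "gradual load": (90_000.0, 3.0, 0.15),
    "compacted":    (120_000.0, 2.0, 0.05),
}

# Simplified zoned estimate: V = sum over zones of A * h * k.
capacity = sum(a * h * k for a, h, k in zones.values())
print(f"estimated capacity: {capacity:,.0f} m3")  # 124,500 m3
```

The study's contribution is precisely in determining the zone boundaries and coefficients from key layer theory and roof-structure models rather than assuming them, and then cross-checking the total against FLAC3D simulation and pumping tests.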